Rediscovering wget
I used to run wget, an old command line Unix utility, to “get” things from one server and drop them onto another. I am pretty sure I was running a port of it on an old version of MS-DOS or Windows.
I am using wget to clone a copy of my JT30.COM website from the internet down onto my local disk. The command line (sorry, no window for wget) is:
wget --mirror --convert-links --adjust-extension --page-requisites --no-parent https://site-to-download.com
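For reference, this is roughly what each of those switches does:

--mirror            download the site recursively with time-stamping (shorthand for -r -N -l inf --no-remove-listing)
--convert-links     rewrite links in the downloaded pages so they point at the local copies
--adjust-extension  save pages with an .html extension, even when the server URL ends in .php or has no extension
--page-requisites   also grab the images, stylesheets, and scripts each page needs to display
--no-parent         never climb above the starting directory on the server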
I am grabbing JT30.COM so that I can host my websites from my local disk, and this is the next one on my list. I am using wget to get “flat files” only, because the original site uses the PHP programming language to build pages as they are requested. My goal is to have no programming at all on any of my sites. So far wget has copied more than 22,000 files and shows no sign of stopping. JT30 was a big website and I wrote or collected lots of information on how to play blues harmonica.
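By default wget drops everything into a directory named after the host, so while it runs I can count how many files have landed so far with something like this (assuming the directory ends up named jt30.com):

find jt30.com -type f | wc -l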
At one point I had split off several websites into their own domains because JT30.COM was getting too big. Since then I have recombined them into one big site. I had HARPAMPS.COM, HARPLOG.COM, HARPL.COM, and probably others that I don’t remember. They are all together now, though.
Wget has been running for half an hour and shows no sign of slowing down.
The next site that I get will be BLOGSEYE.COM, which was my programming blog for many years. It has a few thousand blog entries, but it will be easier to work with since I gave up the domain a long time ago (it is now registered with a domain squatter, so I can’t get it back). I have a zip of the last version of the site. I will probably add it here, under my KPGraham.com domain.